List of AI News about API keys
| Time | Details |
|---|---|
| 2026-03-13 18:16 | **AI Agent Flags Exposed Databases: Supabase and Firestore Incidents Reveal 222K Emails — Security Analysis and 2026 Lessons.** According to @galnagli on X, an AI agent discovered two misconfigured databases — moltbook on Supabase exposing 35K emails and RentAHuman on Firestore exposing 187K emails — both shipped without security rules and both fixed before any reported harm. As reported by Wiz, the moltbook exposure additionally revealed millions of API keys due to public database access and missing row-level security, underscoring how rapid prototyping with managed backends can create severe data-leakage risks. According to Wiz, enforcing default-deny rules, enabling Supabase RLS, and hardening Firebase security rules can reduce the blast radius, while integrating automated AI security agents into CI/CD offers a scalable guardrail for startups shipping fast. |
| 2026-01-31 23:44 | **Moltbook Agents: Latest Analysis Reveals Language Creation and Security Risks.** According to God of Prompt on Twitter, Moltbook represents the first experiment in deploying autonomous agents in uncontrolled environments, where the agents have been observed developing their own communication protocols. However, as reported by God of Prompt and Gal Nagli, the platform's 'vibe coded' architecture has introduced significant security vulnerabilities, including exploits that could compromise sensitive user data such as emails, login tokens, and API keys for over 1.5 million registered users. The reports emphasize that Moltbook currently lacks robust developer oversight and advise against integrating external bots until security standards improve. The situation highlights the critical need for rigorous security practices as AI agents are deployed in open, real-world settings. |
| 2026-01-29 19:06 | **Clawdbot to Moltbot: Chaos Highlights Security Risks in AI Agent Deployments.** According to God of Prompt on Twitter, the forced rebranding of Clawdbot to Moltbot, following a cease-and-desist from Anthropic over trademark issues, led to significant security breaches and financial losses. Scammers quickly hijacked the original handles and launched a fraudulent CLAWD token that spiked to a $16 million market cap before crashing to zero. Users were left with exposed API keys, leaked private conversations, and unexpected $200-per-month bills for nonfunctional setups. The episode underscores the critical need for robust security and infrastructure practices when deploying AI agents. |
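
A recurring theme across these incidents is API keys leaking through code and config that ships without review. As a minimal, non-AI stand-in for the kind of automated CI/CD guardrail the first report recommends, the sketch below scans text for common secret shapes before it reaches a repository. The regex rules are illustrative assumptions, not a complete rule set; production scanners such as gitleaks or trufflehog cover far more patterns.

```python
import re

# Hypothetical rule set for illustration only: one pattern for
# assignments like api_key = "...", one for JWT-shaped tokens.
SECRET_PATTERNS = {
    "generic_api_key": re.compile(
        r"""(?i)\b(?:api[_-]?key|secret)\b\s*[:=]\s*['"]([A-Za-z0-9_\-]{16,})['"]"""
    ),
    "jwt_like_token": re.compile(
        r"\beyJ[A-Za-z0-9_\-]+\.[A-Za-z0-9_\-]+\.[A-Za-z0-9_\-]+\b"
    ),
}

def scan_text(text: str) -> list[tuple[str, int]]:
    """Return (rule_name, line_number) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

# Example: a config file accidentally committed with a live-looking key.
sample = 'db_url = "https://example.supabase.co"\napi_key = "sk_live_abcdefghijklmnop"\n'
print(scan_text(sample))
```

Wired into a pre-commit hook or CI step, a non-empty result would fail the build; this catches leaked credentials in code, while the database-side fixes the reports describe (Supabase RLS, Firebase default-deny rules) close off direct public reads.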
